7 research outputs found

    Complex Event Recognition from Images with Few Training Examples

    We propose to leverage concept-level representations for complex event recognition in photographs given limited training examples. We introduce a novel framework to discover event concept attributes from the web and use them to extract semantic features from images and classify them into social event categories with few training examples. The discovered concepts include a variety of objects, scenes, actions and event sub-types, leading to a discriminative and compact representation for event images. Web images are obtained for each discovered event concept, and we use (pretrained) CNN features to train concept classifiers. Extensive experiments on challenging event datasets demonstrate that our proposed method outperforms several baselines that use deep CNN features directly in classifying images into events with limited training examples. We also demonstrate that our method achieves the best overall accuracy on a dataset with unseen event categories using a single training example. Comment: Accepted to the IEEE Winter Conference on Applications of Computer Vision (WACV'17).
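    The pipeline described above — per-concept classifiers trained on pretrained CNN features, followed by event recognition in the compact concept-score space — can be sketched as below. This is a minimal illustration with synthetic feature vectors, not the paper's actual pipeline: the feature dimensions, the toy signal injected per concept, and the nearest-prototype few-shot classifier are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_concepts, feat_dim = 4, 32  # synthetic stand-ins for CNN features

# 1) One binary classifier per discovered concept, trained on
#    (synthetic stand-ins for) web images of that concept.
concept_clfs = []
for c in range(n_concepts):
    pos = rng.normal(size=(30, feat_dim)); pos[:, c] += 4.0  # toy signal
    neg = rng.normal(size=(30, feat_dim))
    X = np.vstack([pos, neg]); y = np.r_[np.ones(30), np.zeros(30)]
    concept_clfs.append(LinearSVC(dual=False).fit(X, y))

def concept_scores(feat):
    """Compact semantic representation: one classifier score per concept."""
    return np.array([c.decision_function(feat[None])[0] for c in concept_clfs])

# 2) Few-shot event recognition: a single labeled example ("prototype")
#    per event, nearest prototype in concept-score space at test time.
def make_image(active_concept):
    f = rng.normal(size=feat_dim); f[active_concept] += 4.0
    return f

protos = {ev: concept_scores(make_image(ev)) for ev in range(n_concepts)}

def classify(feat):
    s = concept_scores(feat)
    return min(protos, key=lambda ev: np.linalg.norm(s - protos[ev]))

test_event = 2
pred = classify(make_image(test_event))
```

    The design point the sketch captures is that the concept-score vector is far lower-dimensional than the raw CNN feature, which is what makes a single labeled example per event workable.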

    Leveraging mid-level representations for complex activity recognition

    Dynamic scene understanding requires learning representations of the components of a scene, including objects, environments, actions and events. Complex activity recognition from images and videos requires annotating large datasets with action labels, which is a tedious and expensive task. Thus, there is a need to design a mid-level or intermediate feature representation which does not require millions of labels, yet is able to generalize to semantic-level recognition of activities in visual data. This thesis makes three contributions in this regard. First, we propose an event concept-based intermediate representation which learns concepts via the Web and uses this representation to identify events even with a single labeled example. To demonstrate the strength of the proposed approaches, we contribute two diverse social event datasets to the community. We then present a use case of event concepts as a mid-level representation that generalizes to sentiment recognition in diverse social event images. Second, we propose to train Generative Adversarial Networks (GANs) on video frames (which does not require labels), use the trained discriminator from the GAN as an intermediate representation, and fine-tune it on a smaller labeled video activity dataset to recognize actions in videos. This unsupervised pretraining step avoids any manual feature engineering, video frame encoding or searching for the best video frame sampling technique. Our third contribution is a self-supervised learning approach on videos that exploits both spatial and temporal coherency to learn feature representations on video data without any supervision. We demonstrate the transfer learning capability of this model on smaller labeled datasets. We present a comprehensive experimental analysis of the self-supervised model to provide insights into the unsupervised pretraining paradigm and how it can help with activity recognition on target datasets which the model has never seen during training. (Ph.D. thesis)
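    The second contribution's transfer pattern — reuse a discriminator trained without labels as a feature extractor, then adapt it on a small labeled set — can be sketched structurally as follows. Everything here is a stand-in: the "discriminator" is a random-weight hidden layer rather than an adversarially trained network, the data is synthetic, and fine-tuning is reduced to fitting a classifier head (the thesis fine-tunes the network itself).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
frame_dim, hidden_dim = 100, 16

# Stand-in for a GAN discriminator's hidden layer after unsupervised
# training on unlabeled video frames (weights here are random; in the
# thesis they come from adversarial training).
W = rng.normal(size=(frame_dim, hidden_dim))
def disc_features(frames):
    return np.maximum(frames @ W, 0.0)  # ReLU hidden activations

# Small labeled activity dataset: two toy "actions" whose frames
# differ in the statistics of the first 10 dimensions.
def frames(action, n):
    X = rng.normal(size=(n, frame_dim))
    X[:, :10] += 2.0 if action == 1 else -2.0
    return X

X_train = np.vstack([frames(0, 20), frames(1, 20)])
y_train = np.r_[np.zeros(20), np.ones(20)]

# "Fine-tuning" reduced to fitting a head on the reused representation.
head = LogisticRegression().fit(disc_features(X_train), y_train)

X_test = np.vstack([frames(0, 50), frames(1, 50)])
y_test = np.r_[np.zeros(50), np.ones(50)]
acc = head.score(disc_features(X_test), y_test)
```

    The point of the pattern is that the expensive representation-learning step consumes only unlabeled frames; labels are needed only for the small downstream dataset.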

    Clustering Social Event Images using Kernel Canonical Correlation Analysis

    © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. This CVPR 2014 Workshop paper is the Open Access version, provided by the Computer Vision Foundation; the authoritative version is available in IEEE Xplore. DOI: 10.1109/CVPRW.2014.124. Sharing user experiences in the form of photographs, tweets, text, audio and/or video has become commonplace on social networking websites. Browsing through large collections of social multimedia remains a cumbersome task: it requires a user to initiate a textual search query and manually go through a list of resulting images to find relevant information. We propose an automatic clustering algorithm which, given a large collection of images, groups them into clusters of different events using the image features and related metadata. We formulate this problem as a kernel canonical correlation clustering problem in which data samples from different modalities, or 'views', are projected to a space where correlations between the samples' projections are maximized. Our approach enables us to learn a semantic representation of potentially uncorrelated feature sets, and this representation is clustered to yield unique social events. Furthermore, we leverage the rich information associated with each uploaded image (such as usernames, dates/timestamps, etc.) and empirically determine which combination of feature sets yields the best clustering score for a dataset of 100,000 images.

    Towards Using Visual Attributes to Infer Image Sentiment Of Social Events

    DOI: 10.1109/IJCNN.2017.7966013. 2017 International Joint Conference on Neural Networks (IJCNN), May 14-19, 2017, Anchorage, AK. Widespread and pervasive adoption of smartphones has led to instant sharing of photographs that capture events ranging from mundane to life-altering happenings. We propose to capture the sentiment information of such social event images by leveraging their visual content. Our method extracts an intermediate visual representation of social event images based on the visual attributes that occur in the images, going beyond sentiment-specific attributes. We map the top predicted attributes to sentiments and extract the dominant emotion associated with a picture of a social event. Unlike recent approaches, our method generalizes to a variety of social events and even to unseen events, which are not available at training time. We demonstrate the effectiveness of our approach on a challenging social event image dataset, and our method outperforms state-of-the-art approaches for classifying complex event images into sentiments.
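    The "map top predicted attributes to sentiments, take the dominant emotion" step can be sketched as a simple voting scheme. The attribute names, the attribute-to-sentiment lexicon, and the scores below are invented for illustration; the paper's actual attribute vocabulary and mapping are not reproduced here.

```python
from collections import Counter

# Hypothetical lexicon from visual attributes to sentiments.
attr_to_sentiment = {
    "smiling": "joy", "balloons": "joy", "candles": "joy",
    "trophy": "pride", "rubble": "sadness", "crying": "sadness",
}

def dominant_emotion(attr_scores, k=3):
    """Map the top-k predicted visual attributes to the most
    frequent associated sentiment; 'neutral' if none map."""
    top = sorted(attr_scores, key=attr_scores.get, reverse=True)[:k]
    votes = Counter(attr_to_sentiment[a] for a in top if a in attr_to_sentiment)
    return votes.most_common(1)[0][0] if votes else "neutral"

# Toy attribute scores, e.g. from per-attribute classifiers.
scores = {"smiling": 0.9, "balloons": 0.8, "trophy": 0.4, "rubble": 0.1}
emotion = dominant_emotion(scores)  # top-3 vote: joy, joy, pride -> "joy"
```

    Because the vote runs over generic visual attributes rather than sentiment-specific ones, the same lexicon can score images of event types never seen during training.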

    Self-medication practices in medical students during the COVID-19 pandemic: A cross-sectional analysis

    Background and objectives: During the pandemic, the growing influence of social media, the accessibility of over-the-counter medications, and fear of contracting the virus may have led to self-medication practices among the general public. Medical students are prone to such practices due to relevant background knowledge and access to drugs. This study was carried out to determine and analyze the prevalence of self-medication practices among medical students in Pakistan. Materials and methods: This descriptive, cross-sectional study was conducted online; participants were asked about general demographics, their self-medication practices, and their reasons for use. All participants were currently enrolled in a medical college pursuing a medical or pharmacy degree. A non-probability sampling technique was used to recruit participants. Results: A total of 489 respondents were included in the final analysis, for a response rate of 61%. The majority of respondents were female and aged 18-20 years. Self-medication was quite prevalent in our study population, with 406 of 489 individuals (83.0%) reporting use of some drug since the start of the pandemic. The most commonly used medications were paracetamol (65.2%) and multivitamins (56.0%); the reasons reported for their use included cold/flu and preventive measures against COVID-19. The common symptoms reported for self-medication included fever (67.9%), muscle pain (54.0%), fatigue (51.7%), sore throat (46.6%), and cough (44.4%). Paracetamol was the most commonly used drug for all symptoms. Female gender, being in the 3rd year of medical studies, and good self-reported health were associated with more frequent self-medication. Conclusion: Our study revealed common self-medication practices among medical and pharmacy students. This is a significant health issue, especially during pandemic times, with high consumption reported for prevention or treatment of COVID-19 symptoms.